street map

Parallel Digital Twin-driven Deep Reinforcement Learning for User Association and Load Balancing in Dynamic Wireless Networks

Tao, Zhenyu, Xu, Wei, You, Xiaohu

arXiv.org Artificial Intelligence

Optimization of user association in a densely deployed heterogeneous cellular network is challenging, and is further complicated by the dynamic nature of user mobility and fluctuations in user counts. While deep reinforcement learning (DRL) emerges as a promising solution, its application in practice is hindered by high trial-and-error costs in the real world and unsatisfactory physical network performance during training. In addition, existing DRL-based user association methods are usually applicable only to scenarios with a fixed number of users, due to convergence and compatibility challenges. In this paper, we propose a parallel digital twin (DT)-driven DRL method for user association and load balancing in networks with dynamic user counts, distributions, and mobility patterns. Our method employs a distributed DRL strategy to handle varying user numbers and exploits a refined neural network structure for faster convergence. To address these DRL training-related challenges, we devise a high-fidelity DT construction technique featuring a zero-shot generative user mobility model, named Map2Traj, based on a diffusion model. Map2Traj estimates user trajectory patterns and spatial distributions solely from street maps. Armed with this DT environment, DRL agents can be trained without the need for interactions with the physical network. To enhance the generalization ability of DRL models in dynamic scenarios, a parallel DT framework is further established to alleviate the strong correlation and non-stationarity of single-environment training and to improve training efficiency. Numerical results show that the proposed parallel DT-driven DRL method achieves performance closely comparable to real-environment training, and even outperforms models trained in a single real-world environment, with a nearly 20% gain in cell-edge user performance.
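As a rough illustration of the parallel-DT idea sketched in the abstract, training experience gathered from several independent twin environments is less correlated than a single-environment rollout. The toy sketch below makes that concrete; all class and function names are hypothetical and not taken from the paper:

```python
import random

class TwinEnv:
    """Toy stand-in for one digital-twin network environment (hypothetical)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.n_users = self.rng.randint(5, 15)  # dynamic user count per twin

    def step(self, action):
        # Placeholder reward: penalize deviation from a balanced load.
        return -abs(action - self.n_users / 2)

def collect_parallel(envs, policy):
    """Gather one transition per twin; mixing several environments
    decorrelates the training batch compared to a single rollout."""
    return [env.step(policy(env.n_users)) for env in envs]

envs = [TwinEnv(seed=s) for s in range(4)]  # parallel DT framework
policy = lambda n: n // 2                   # trivial stand-in policy
batch = collect_parallel(envs, policy)      # one sample from each twin
```

In a real system the policy would be a DRL agent and the reward a load-balancing metric; here both are placeholders.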


Map2Traj: Street Map Piloted Zero-shot Trajectory Generation with Diffusion Model

Tao, Zhenyu, Xu, Wei, You, Xiaohu

arXiv.org Artificial Intelligence

User mobility modeling plays a crucial role in the analysis and optimization of contemporary wireless networks. Typical stochastic mobility models, e.g., the random waypoint model and the Gauss-Markov model, can hardly capture the distribution characteristics of users within real-world areas. State-of-the-art trace-based mobility models and existing learning-based trajectory generation methods, however, are frequently constrained by the inaccessibility of substantial real trajectories due to privacy concerns. In this paper, we harness the intrinsic correlation between street maps and trajectories and develop a novel zero-shot trajectory generation method, named Map2Traj, by exploiting the diffusion model. We incorporate street maps as a condition to consistently pilot the denoising process, and train our model on diverse sets of real trajectories from various regions in Xi'an, China, and their corresponding street maps. With solely the street map of an unobserved area, Map2Traj generates synthetic trajectories that not only closely resemble the real-world mobility pattern but also offer comparable efficacy. Extensive experiments validate the efficacy of our proposed method on zero-shot trajectory generation tasks in terms of both trajectory and distribution similarities. In addition, a case study of employing Map2Traj in wireless network optimization is presented to validate its efficacy for downstream applications.
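The map-conditioned denoising the abstract describes can be sketched as a conditional DDPM reverse step in which the noise predictor also receives the street-map tensor. The snippet below is a toy illustration with a dummy noise predictor, not the authors' model:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, cond, eps_model, betas, rng):
    """One conditional DDPM denoising step; the noise estimate is
    conditioned on the street-map tensor `cond` (names illustrative)."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = np.prod(1.0 - betas[: t + 1])
    eps = eps_model(x_t, t, cond)  # predicted noise, map-conditioned
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_t)
    if t > 0:
        return mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)       # short toy noise schedule
street_map = rng.standard_normal((8, 8))  # stand-in map condition
eps_model = lambda x, t, c: 0.1 * c       # dummy conditional predictor
x = rng.standard_normal((8, 8))           # start from pure noise
for t in reversed(range(len(betas))):     # full reverse (denoising) chain
    x = ddpm_reverse_step(x, t, street_map, eps_model, betas, rng)
```

A trained model would replace `eps_model` with a network and decode `x` into a trajectory; the sketch only shows where the map condition enters each step.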


Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images

Wu, Xindi, Lau, KwunFung, Ferroni, Francesco, Ošep, Aljoša, Ramanan, Deva

arXiv.org Artificial Intelligence

Self-driving vehicles rely on urban street maps for autonomous navigation. In this paper, we introduce Pix2Map, a method for inferring urban street map topology directly from ego-view images, as needed to continually update and expand existing maps. This is a challenging task, as we need to infer a complex urban road topology directly from raw image data. The main insight of this paper is that this problem can be posed as cross-modal retrieval by learning a joint, cross-modal embedding space for images and existing maps, represented as discrete graphs that encode the topological layout of the visual surroundings. We conduct our experimental evaluation using the Argoverse dataset and show that it is indeed possible to accurately retrieve street maps corresponding to both seen and unseen roads solely from image data. Moreover, we show that our retrieved maps can be used to update or expand existing maps and even show proof-of-concept results for visual localization and image retrieval from spatial graphs.
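Posing the problem as cross-modal retrieval reduces, at inference time, to nearest-neighbor search in the joint embedding space. A minimal sketch with random stand-in embeddings (not the learned ones) might look like:

```python
import numpy as np

def retrieve(image_emb, map_embs):
    """Return the index of the street-map graph whose embedding has the
    highest cosine similarity to the ego-view image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    maps = map_embs / np.linalg.norm(map_embs, axis=1, keepdims=True)
    return int(np.argmax(maps @ img))  # cosine similarity via dot product

rng = np.random.default_rng(1)
map_embs = rng.standard_normal((5, 16))  # embeddings of 5 candidate maps
# Simulate an image whose embedding lies close to candidate map 3:
image_emb = map_embs[3] + 0.05 * rng.standard_normal(16)
idx = retrieve(image_emb, map_embs)
```

The learned encoders would produce `image_emb` and `map_embs`; retrieval itself is just this normalized dot product and argmax.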


Inferring and Improving Street Maps with Data-Driven Automation

Communications of the ACM

Street maps help to inform a wide range of decisions. Drivers, cyclists, and pedestrians use them for search and navigation. Rescue workers responding to disasters such as hurricanes, tsunamis, and earthquakes rely on street maps to understand where people are and to locate individual buildings. Transportation researchers consult street maps to conduct transportation studies, such as analyzing pedestrian accessibility to public transport. Indeed, with the need for accurate street maps growing in importance, companies are spending hundreds of millions of dollars to map roads globally. However, street maps are incomplete or lag behind new construction in many parts of the world. In rural Indonesia, for example, entire groups of villages are missing from OpenStreetMap, a popular open map dataset. In many of these villages, the closest mapped road is miles away. In Qatar, construction of new infrastructure has boomed in preparation for the FIFA World Cup 2022.


Street-Map Based Validation of Semantic Segmentation in Autonomous Driving

von Rueden, Laura, Wirtz, Tim, Hueger, Fabian, Schneider, Jan David, Piatkowski, Nico, Bauckhage, Christian

arXiv.org Artificial Intelligence

Artificial intelligence for autonomous driving must meet strict requirements on safety and robustness, which motivates the thorough validation of learned models. However, current validation approaches mostly require ground truth data and are thus both cost-intensive and limited in their applicability. We propose to overcome these limitations with a model-agnostic validation that uses a priori knowledge from street maps. In particular, we show how to validate semantic segmentation masks and demonstrate the potential of our approach using OpenStreetMap. We introduce validation metrics that indicate false positive or false negative road segments. Besides the validation approach, we present a method to correct the vehicle's GPS position so that a more accurate localization can be used for the street-map-based validation. Lastly, we present quantitative results on the Cityscapes dataset indicating that our validation approach can indeed uncover errors in semantic segmentation masks.
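The false-positive / false-negative road-segment metrics can be illustrated with a toy comparison between a predicted road mask and a mask rasterized from map data; the array shapes and names below are illustrative, not the paper's implementation:

```python
import numpy as np

def road_validation_metrics(pred_mask, map_mask):
    """Compare a predicted road mask against a mask rasterized from
    street-map data (both boolean arrays of the same shape).
    False positives: road predicted where the map has none.
    False negatives: mapped road missed by the prediction."""
    fp = np.logical_and(pred_mask, ~map_mask).sum()
    fn = np.logical_and(~pred_mask, map_mask).sum()
    total = map_mask.size
    return fp / total, fn / total

map_mask = np.zeros((4, 4), dtype=bool)
map_mask[1:3, :] = True    # map shows a horizontal road band (rows 1-2)
pred_mask = np.zeros((4, 4), dtype=bool)
pred_mask[2:4, :] = True   # prediction is shifted one row down (rows 2-3)
fp_rate, fn_rate = road_validation_metrics(pred_mask, map_mask)
# fp_rate = 4/16 = 0.25 (row 3 wrongly predicted),
# fn_rate = 4/16 = 0.25 (row 1 missed)
```

In practice the map mask would come from rendered OpenStreetMap road geometry aligned via the corrected GPS position; the metric computation itself is this simple per-pixel comparison.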


Towards Map-Based Validation of Semantic Segmentation Masks

von Rueden, Laura, Wirtz, Tim, Hueger, Fabian, Schneider, Jan David, Bauckhage, Christian

arXiv.org Artificial Intelligence

Artificial intelligence for autonomous driving must meet strict requirements on safety and robustness. We propose to validate machine learning models for self-driving vehicles not only with given ground truth labels, but also with additional a priori knowledge. In particular, we suggest validating the drivable area in semantic segmentation masks using given street map data. We present first results, which indicate that prediction errors can be uncovered by map-based validation.


Clever Artificial Intelligence Hides Information to Cheat Later at Task

#artificialintelligence

Artificial intelligence has become so capable that it is learning when to hide information that can be used later. Research from Stanford University and Google discovered that a machine learning agent tasked with transforming aerial images into maps was hiding information in order to cheat later. CycleGAN is a neural network that learns to transform images. In early results the agent appeared to be doing well, but when it was later asked to perform the reverse process of reconstructing aerial photographs from street maps, it reproduced information that had been eliminated in the first step, TechCrunch reported. For instance, skylights on a roof that were eliminated in the process of creating a street map would reappear when the agent was asked to reverse the process.


This clever AI hid data from its creators to cheat at its appointed task

#artificialintelligence

Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in "a nearly imperceptible, high-frequency signal." But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do. The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google's famously accurate maps. To that end the team was working with what's called a CycleGAN -- a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
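The "nearly imperceptible, high-frequency signal" trick can be mimicked with a toy steganography sketch: bits are embedded as a tiny alternating-sign perturbation on a smooth carrier and recovered by comparing against the clean carrier. This illustrates the effect only; it is not how CycleGAN's networks actually encode the hidden detail:

```python
import numpy as np

def hide(carrier, bits, amp=1e-3):
    """Embed bits as a low-amplitude, alternating-sign (high-frequency)
    perturbation on a smooth carrier signal (toy sketch)."""
    signs = np.where(np.arange(len(bits)) % 2 == 0, 1.0, -1.0)
    return carrier + amp * signs * (2 * np.asarray(bits) - 1)

def recover(stego, carrier):
    """Recover the hidden bits by differencing against the clean carrier."""
    signs = np.where(np.arange(len(stego)) % 2 == 0, 1.0, -1.0)
    return ((stego - carrier) * signs > 0).astype(int)

carrier = np.linspace(0.0, 1.0, 8)            # smooth "street map" signal
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])     # detail to smuggle through
stego = hide(carrier, bits)                   # visually ≈ identical to carrier
decoded = recover(stego, carrier)             # perturbation decodes the bits
```

The perturbation amplitude (`1e-3` here) is far below what a human would notice in the output image, which is exactly why the behavior went undetected at first.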

